14 research outputs found

    Segmentation of electron microscope images with a learning-based system (Elektronimikroskooppikuvien segmentointi oppivan järjestelmän avulla)

    Get PDF
    A great deal of image analysis in today’s materials research is done manually, which can be time-consuming and tedious. This thesis is a case study of two image analysis problems in the field of materials science for which automatic methods are developed to aid the analysis process. In the first case, an automatic segmentation method is developed to segment zeolite pores. The method is based on logistic regression combined with sparsity-promoting LASSO regularization. It finds more pores than humans, with better consistency and speed, yielding an average PAS score of 0.79 and an F1 score of 0.89. Much of the error stems from the fact that humans delineate pore perimeters in varying ways, whereas the method consistently produces similar segmentations. The second case considers automatic segmentation of silver nanoparticles by combining LASSO-regularized logistic regression with watershed segmentation. The method is faster than manual segmentation and produces good segments when particles overlap little; however, when the overlap is high, the segments are flawed. The performance of the system in terms of the PAS metric and F1 score is 0.76 and 0.86, respectively. An interesting property of training-based segmentation with sparsity promotion is that training data can be collected by anyone. This enables building adaptive segmentation software for anyone, regardless of image processing experience.
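
    The pixel-classification idea above can be pictured as a minimal sketch: an L1-penalised (LASSO) logistic regression trained on simple per-pixel features. The feature choices and parameter values here are illustrative assumptions, and connected-component labelling stands in for the watershed stage the thesis uses to split overlapping particles.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def pixel_features(img):
    # Illustrative per-pixel features: raw intensity, a smoothed version,
    # and gradient magnitude (the thesis's actual features may differ).
    smooth = ndimage.gaussian_filter(img, sigma=2)
    grad = ndimage.gaussian_gradient_magnitude(img, sigma=2)
    return np.stack([img, smooth, grad], axis=-1).reshape(-1, 3)

def train(img, labels):
    # penalty="l1" is the sparsity-promoting LASSO term: it drives the
    # weights of uninformative features to exactly zero, so annotators
    # can supply many candidate features without hand-tuning.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(pixel_features(img), labels.ravel())
    return clf

def segment(img, clf):
    mask = clf.predict(pixel_features(img)).reshape(img.shape).astype(bool)
    # Connected components of the foreground mask; the thesis instead
    # applies watershed here to separate touching particles.
    segments, n = ndimage.label(mask)
    return segments, n
```

    Because the classifier is trained purely from labelled pixels, anyone can supply training annotations, which is the adaptivity property the abstract highlights.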

    Spatial analysis of histology in 3D: quantification and visualization of organ and tumor level tissue environment

    Get PDF
    Histological changes in tissue are of primary importance in pathological research and diagnosis. Automated histological analysis requires the ability to computationally separate pathological alterations from normal tissue. Conventional histopathological assessments are performed on individual tissue sections, leading to the loss of the three-dimensional context of the tissue. Yet the tissue context and spatial determinants are critical in several pathologies, such as in understanding growth patterns of cancer in its local environment. Here, we develop computational methods for visualization and quantitative assessment of histopathological alterations in three dimensions. First, we reconstruct a 3D representation of the whole organ from serially sectioned tissue. Then, we analyze the histological characteristics and regions of interest in 3D. As our example cases, we use whole slide images representing hematoxylin-eosin-stained whole mouse prostates in a Pten+/- mouse prostate tumor model. We show that quantitative assessment of tumor sizes, shapes, and separation between spatial locations within the organ enables characterizing and grouping tumors. Further, we show that 3D visualization of tissue with computationally quantified features provides an intuitive way to observe tissue pathology. Our results underline the heterogeneity in composition and cellular organization within individual tumors. As an example, we show how prostate tumors have nuclear density gradients indicating directions of tumor growth and reflecting varying pressure from the surrounding tissue. The methods presented here are applicable to any tissue and different types of pathologies. This work provides a proof of principle for gaining a comprehensive view of histology by studying it quantitatively in 3D.

    Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration

    Get PDF
    Background: Virtual reality (VR) enables data visualization in an immersive and engaging manner, and it can be used to create new ways to explore scientific data. Here, we use VR for visualization of 3D histology data, creating a novel interface for digital pathology to aid cancer research. Methods: Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full-resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels representing different ranges of detail, namely the organ level and the sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology. Results: In our interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects, which is not possible in the traditional 2D image view used in digital pathology. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases, and included quantitative histological features relevant for tumor biology in the VR model. Conclusions: Our application enables a novel way to explore high-resolution, multidimensional data for biomedical research purposes, and can also be used in teaching and researcher training. Due to automated processing of the histology data, our application can be easily adapted to visualize other organs and pathologies from various origins.

    Predicting Molecular Phenotypes from Histopathology Images: A Transcriptome-Wide Expression-Morphology Analysis in Breast Cancer

    Get PDF
    Molecular profiling is central to cancer precision medicine but remains costly and is based on tumor-average profiles. Morphologic patterns observable in histopathology sections from tumors are determined by the underlying molecular phenotype and therefore have the potential to be exploited for prediction of molecular phenotypes. We report here the first transcriptome-wide expression-morphology (EMO) analysis in breast cancer, in which individual deep convolutional neural networks were optimized and validated for prediction of mRNA expression in 17,695 genes from hematoxylin and eosin-stained whole slide images. Predicted expression in 9,334 (52.75%) genes was significantly associated with RNA sequencing estimates. We also demonstrated successful prediction of an mRNA-based proliferation score with established clinical value. The results were validated in independent internal and external test datasets. Predicted spatial intratumor variability in expression was validated through spatial transcriptomics profiling. These results suggest that EMO provides a cost-efficient and scalable approach to predict both tumor-average and intratumor spatial expression from histopathology images. Significance: Transcriptome-wide expression-morphology deep learning analysis enables prediction of mRNA expression and proliferation markers from routine histopathology whole slide images in breast cancer.
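
    The "significantly associated" validation step can be pictured as a per-gene correlation test between predicted and measured expression. A minimal sketch follows; the study's actual test statistic and multiple-testing correction are not specified here, so the Spearman test and Bonferroni threshold below are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def associated_genes(pred, meas, alpha=0.05):
    """pred, meas: (n_samples, n_genes) arrays of predicted and RNA-seq
    expression. Return indices of genes whose predictions correlate
    significantly with the measurements (Bonferroni-corrected)."""
    n_genes = pred.shape[1]
    keep = []
    for g in range(n_genes):
        rho, p = spearmanr(pred[:, g], meas[:, g])
        if p < alpha / n_genes:  # Bonferroni correction across genes
            keep.append(g)
    return keep
```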

    ACROBAT -- a multi-stain breast cancer histological whole-slide-image data set from routine diagnostics for computational pathology

    Full text link
    The analysis of FFPE tissue sections stained with haematoxylin and eosin (H&E) or immunohistochemistry (IHC) is an essential part of the pathologic assessment of surgically resected breast cancer specimens. IHC staining has been broadly adopted into diagnostic guidelines and routine workflows to manually assess the status and scoring of several established biomarkers, including ER, PGR, HER2 and KI67. However, this is a task that can also be facilitated by computational pathology image analysis methods. Research in computational pathology has recently made numerous substantial advances, often based on publicly available whole slide image (WSI) data sets. However, the field is still considerably limited by the sparsity of public data sets. In particular, there are no large, high-quality publicly available data sets with WSIs of matching IHC and H&E-stained tissue sections. Here, we publish the largest publicly available data set to date of WSIs of tissue sections from surgical resection specimens from female primary breast cancer patients, with matched WSIs of corresponding H&E- and IHC-stained tissue, consisting of 4,212 WSIs from 1,153 patients. The primary purpose of the data set was to facilitate the ACROBAT WSI registration challenge, which aims at accurately aligning H&E and IHC images. For research in the area of image registration, automatic quantitative feedback on registration algorithm performance remains available through the ACROBAT challenge website, based on more than 37,000 manually annotated landmark pairs from 13 annotators. Beyond registration, this data set has the potential to enable many different avenues of computational pathology research, including stain-guided learning, virtual staining, unsupervised pre-training, artefact detection and stain-independent models.

    The ACROBAT 2022 Challenge: Automatic Registration Of Breast Cancer Tissue

    Full text link
    The alignment of tissue between histopathological whole-slide images (WSI) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis. Yet the current state of the art in WSI registration remains unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, including 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can lead to highly accurate registration performance, and we identify covariates that impact performance across methods. These results establish the current state of the art in WSI registration and guide researchers in selecting and developing methods.
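
    Rankings in such challenges are computed from manually annotated landmark pairs: a submitted transform is applied to the source landmarks and compared against their annotated targets. A minimal illustrative scorer for a 2D affine transform follows; it is not the actual ACROBAT evaluation code, whose exact metric and normalisation are assumptions here.

```python
import numpy as np

def median_landmark_error(T, src_pts, dst_pts):
    """Median Euclidean distance between transformed source landmarks and
    their annotated target counterparts. T is a 3x3 affine matrix acting
    on homogeneous 2D coordinates; src_pts, dst_pts are (n, 2) arrays."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous
    mapped = src_h @ T.T
    return float(np.median(np.linalg.norm(mapped[:, :2] - dst_pts, axis=1)))
```

    The median (rather than the mean) keeps a few badly annotated or badly registered landmarks from dominating the score.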

    Artificial intelligence for diagnosis and Gleason grading of prostate cancer: The PANDA challenge

    Get PDF
    Through a community-driven competition, the PANDA challenge provides a curated, diverse dataset and a catalog of models for prostate cancer pathology, and represents a blueprint for evaluating AI algorithms in digital pathology. Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to be accelerators for medical imaging innovations, but their impact is hindered by a lack of reproducibility and independent validation. With this in mind, we organized the PANDA challenge, the largest histopathology competition to date, joined by 1,290 developers, to catalyze the development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted kappa; 95% confidence interval (CI), 0.840-0.884) and 0.868 (95% CI, 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories, and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
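
    The 0.862 and 0.868 agreement figures are quadratically weighted Cohen's kappa values, which scikit-learn computes directly. A minimal illustration follows, using hypothetical ISUP grade-group labels (0 = benign through 5); the label lists are made up for demonstration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical algorithm vs. pathologist ISUP grade-group assignments
algo = [0, 1, 2, 2, 3, 4, 5, 1, 0, 3]
path = [0, 1, 2, 3, 3, 4, 5, 2, 0, 3]

# weights="quadratic" penalises a disagreement by the squared distance
# between the grades, so mistaking grade 1 for grade 5 costs far more
# than mistaking grade 1 for grade 2.
kappa = cohen_kappa_score(algo, path, weights="quadratic")
```

    Quadratic weighting is the conventional choice for ordinal scales such as Gleason grade groups, where near-misses matter much less than gross errors.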
